Avoiding Wireheading with Value Reinforcement Learning

Authors

  • Tom Everitt
  • Marcus Hutter
Abstract

How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) is a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward – the so-called wireheading problem. In this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent’s actions. The constraint is defined in terms of the agent’s belief distributions, and does not require an explicit specification of which actions constitute wireheading.
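
As a concrete illustration of the idea, the following minimal Python sketch (not the paper's formal model) treats the reward signal as noisy evidence about which of a few hypothetical candidate utility functions is the true one, and filters actions with a crude stand-in for the paper's belief-based constraint. The candidate utilities, the Gaussian likelihood, and the expected-posterior check are all assumptions made for this sketch.

```python
import numpy as np

# Hypothetical candidate utility functions over 4 states (rows = hypotheses).
CANDIDATE_UTILITIES = np.array([
    [0.0, 1.0, 0.0, 0.0],   # u1: state 1 is valuable
    [0.0, 0.0, 1.0, 0.0],   # u2: state 2 is valuable
    [0.0, 0.0, 0.0, 1.0],   # u3: state 3 is valuable
])


class VRLAgentSketch:
    """Toy VRL-style agent: reward is evidence about an unknown utility function."""

    def __init__(self, noise=0.1):
        n = CANDIDATE_UTILITIES.shape[0]
        self.belief = np.full(n, 1.0 / n)   # prior over utility hypotheses
        self.noise = noise

    def update_belief(self, state, reward):
        # Bayesian update: each hypothesis predicts the reward it would emit
        # in `state`; the observed reward is treated as that value plus noise.
        predicted = CANDIDATE_UTILITIES[:, state]
        likelihood = np.exp(-(reward - predicted) ** 2 / (2 * self.noise ** 2))
        posterior = self.belief * likelihood
        self.belief = posterior / posterior.sum()

    def expected_utility(self, state):
        # Value of a state under the current belief over utility functions.
        return float(self.belief @ CANDIDATE_UTILITIES[:, state])

    def choose_action(self, options):
        # `options` maps each action to (predicted_next_state,
        # expected_posterior_over_utilities) according to the agent's own model.
        #
        # Crude stand-in for the paper's belief-based constraint: for honestly
        # generated evidence the *expected* posterior equals the current belief
        # (conservation of expected evidence), whereas an action the agent
        # predicts will bias the reward channel toward a favoured hypothesis
        # shifts it, and is therefore filtered out before maximising utility.
        allowed = {a: s for a, (s, exp_post) in options.items()
                   if np.allclose(exp_post, self.belief, atol=1e-2)}
        if not allowed:
            return None
        return max(allowed, key=lambda a: self.expected_utility(allowed[a]))
```

In this sketch, an action that tampers with the reward channel would, on the agent's own model, push the expected posterior toward a favoured hypothesis and so be filtered out, while honestly generated evidence leaves the expected posterior equal to the current belief. It is only meant to convey the flavour of a constraint stated on belief distributions rather than as an explicit list of forbidden actions.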

Similar articles

Coordination of multiple behaviors acquired by a vision-based reinforcement learning

A method is proposed which accomplishes a whole task consisting of multiple subtasks by coordinating multiple behaviors acquired through vision-based reinforcement learning. First, the individual behaviors that achieve the corresponding subtasks are independently acquired by Q-learning, a widely used reinforcement learning method. Each learned behavior can be represented by an action-value function in ...
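
For reference, the Q-learning building block mentioned above amounts to maintaining an action-value table and nudging it toward a one-step bootstrapped target. A minimal, generic sketch follows (the state and action counts are placeholders, not taken from the paper):

```python
import numpy as np

# Minimal tabular Q-learning: each learned behaviour is summarised by an
# action-value table Q[state, action].
N_STATES, N_ACTIONS = 16, 4
Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
    # One temporal-difference step toward r + gamma * max_a' Q(s', a').
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

def epsilon_greedy(s, epsilon=0.1):
    # Explore with probability epsilon, otherwise act greedily on Q.
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(Q[s].argmax())
```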

Using Reinforcement Learning to Introduce Artificial Intelligence in the CS Curriculum

There are many interesting topics in artificial intelligence that would be useful to stimulate student interest at various levels of the computer science curriculum. They can also be used to illustrate some basic concepts of computer science, such as arrays. One such topic is reinforcement learning – teaching a computer program how to play a game or traverse an environment using a system of rew...

Avoiding Confusion between Predictors and Inhibitors in Value Function Approximation

Reinforcement learning treats each input, feature, or stimulus as having a positive or negative reward value. Some stimuli, however, negate or inhibit the values of certain other predictors (excitors) when presented with them, but are otherwise neutral. We show that both linear and non-linear value-function approximators assign inhibitory features a strong value with the opposite valence of the...
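
The effect described here can be reproduced with a toy least-squares fit (an illustration constructed for this summary, not the authors' experiment): train a linear value function on trials where excitor A alone yields reward and A paired with inhibitor B yields none, and B receives a strong negative weight even though it was never paired with punishment.

```python
import numpy as np

# Toy conditioned-inhibition training set (illustrative, not from the paper).
# Feature columns: [A present, B present].
X_train = np.array([
    [1.0, 0.0],   # excitor A alone    -> reward 1
    [1.0, 1.0],   # A together with B  -> reward 0 (B inhibits A's prediction)
])
y_train = np.array([1.0, 0.0])

# Linear value-function approximation by least squares: V(s) = w . x(s).
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print(dict(zip(["A (excitor)", "B (inhibitor)"], np.round(w, 3).tolist())))
# -> {'A (excitor)': 1.0, 'B (inhibitor)': -1.0}

# B was never paired with punishment, yet presenting B alone is now predicted
# to be strongly aversive: the inhibitor/punisher confusion described above.
v_b_alone = float(np.array([0.0, 1.0]) @ w)
print(round(v_b_alone, 3))   # -> -1.0
```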

Reinforcement Learning for Penalty Avoiding Policy Making and its Extensions and an Application to the Othello Game

The purpose of a reinforcement learning system is, in general, to learn optimal policies. However, from an engineering point of view, it is useful and important to acquire not only optimal policies but also penalty-avoiding policies. In this paper, we focus on the formation of penalty-avoiding policies based on the Penalty Avoiding Rational Policy Making algorithm [1]. In applying the algorithm...

A Vision-Based Reinforcement Learning For Coordination Of Soccer Playing Behaviors

A method is proposed which acquires the purposive behavior of shooting a ball into the goal while avoiding collisions with an opponent. In [Asada et al., 1994], we presented a soccer robot which learned to shoot a ball into the goal without any opponent, using Q-learning, one of the reinforcement learning methods. Since a simple extension of the method is not practical due to its huge state spa...

Publication date: 2016